Nacer Farajzadeh; Hiwa Ebrahimzadeh
Abstract
The development of automatic road and building detection systems for aerial imagery is always faced with challenges such as the appearance of buildings, illumination changes, imaging angles, and the density of roads and buildings in urban areas, to name a few. In recent years, employing a multi-layered approach in artificial neural networks, known as deep neural networks, has attracted many researchers in this field (and other fields alike), achieving stunning results. However, the use of fully connected layers in this approach significantly increases the average processing time and results in an overfitted model. In addition, most of these methods take a single-class approach; that is, detecting roads and buildings in natural scenes at the same time is not possible, and separate binary models must therefore be built for each of them. The main goal of this research is to design a new architecture whose resulting model can simultaneously detect roads and buildings in natural scenes, thus minimizing the complexity of the classification process. In addition, the proposed architecture excludes all fully connected layers found in traditional multi-layered architectures in order to reduce the average processing time. The results of experiments performed on the Massachusetts dataset show that the proposed architecture runs 38% faster than other deep neural network-based methods and also increases accuracy by an average of 2%.
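The abstract does not specify the architecture, but the core idea of excluding fully connected layers while producing multi-class (road and building) output can be sketched with a 1x1 convolution acting as a per-pixel classification head. The following is a minimal illustrative sketch in NumPy, not the paper's actual method; the feature map size, channel counts, and three-class layout (background/road/building) are all assumptions:

```python
import numpy as np

def conv1x1(feature_map, weights, bias):
    # feature_map: (H, W, C_in); weights: (C_in, C_out); bias: (C_out,)
    # A 1x1 convolution is a per-pixel linear map. Unlike a fully
    # connected layer, it keeps the spatial grid intact, so every pixel
    # gets its own class scores in a single forward pass.
    return feature_map @ weights + bias

rng = np.random.default_rng(0)
features = rng.standard_normal((8, 8, 16))  # stand-in for earlier CNN features
w = rng.standard_normal((16, 3))            # 3 classes: background, road, building
b = np.zeros(3)

logits = conv1x1(features, w, b)            # (8, 8, 3) per-pixel class scores
labels = logits.argmax(axis=-1)             # roads and buildings labeled at once
print(logits.shape, labels.shape)
```

Because the head is convolutional, one model labels roads and buildings simultaneously instead of running two separate binary models, which matches the single-pass goal described above.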
Naser Farajzadeh; Mehdi Hashemzadeh
Abstract
Generally, photos captured by drones and satellites include both natural scenes and man-made objects. Having these two categories classified, we can extract important information from aerial scenes, such as the shapes and alignments of structures, and then create labeled aerial images accordingly. Obtaining such information is of great interest in, for example, military, urban, and environmental protection applications. However, given the huge amount of data collected in the form of images, manually processing such data is impossible. Therefore, automatic techniques based on artificial intelligence have become increasingly in demand. There are numerous studies on this topic, among which the detection of buildings, vehicles, roads, and vegetation is of particular interest. In this paper, we introduce a method to detect man-made objects in aerial images based on a new set of color statistical features, which can be easily extracted, together with a learning model. Experimental results on a publicly available dataset, the Massachusetts dataset, are promising in terms of both accuracy and processing time: the accuracy and the average processing time are 90.07% and 0.96 seconds, respectively.